

Towards Transparent AI: A Survey on Explainable Large Language Models

Palikhe, Avash, Yu, Zhenyu, Wang, Zichong, Zhang, Wenbin

arXiv.org Artificial Intelligence

Large Language Models (LLMs) have played a pivotal role in advancing Artificial Intelligence (AI). Despite their achievements, however, LLMs often cannot explain their decision-making processes, making them 'black boxes' and posing a substantial challenge to explainability. This lack of transparency is a significant obstacle to adopting LLMs in high-stakes domains, where interpretability is particularly essential. To overcome these limitations, researchers have developed various explainable artificial intelligence (XAI) methods that provide human-interpretable explanations for LLMs. A systematic understanding of these methods, however, remains limited. To address this gap, this survey provides a comprehensive review of explainability techniques, categorizing XAI methods by the underlying transformer architecture of the LLM: encoder-only, decoder-only, and encoder-decoder models. The survey then examines how these techniques are evaluated and how the resulting explanations are leveraged in practical applications. Finally, it discusses available resources, ongoing research challenges, and future directions, aiming to guide continued efforts toward developing transparent and responsible LLMs.
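The kind of feature-attribution method such surveys cover can be illustrated with a minimal, self-contained sketch. The vocabulary, weights, and linear "model" below are hypothetical stand-ins, not the survey's own method: for a linear scorer the gradient with respect to the input is just the weight vector, so the gradient-times-input attribution reduces to weight times feature value. Saliency methods for real LLMs apply the same idea by backpropagating through the full transformer stack.

```python
import numpy as np

# Toy sentiment "model": a linear scorer over a bag-of-words vector.
# (Illustrative stand-in for an LLM; the weights are made up.)
vocab = ["movie", "was", "great", "terrible", "the"]
weights = np.array([0.0, 0.0, 2.0, -2.5, 0.0])

def score(x):
    # Model output: a single sentiment score.
    return float(weights @ x)

def gradient_x_input(x):
    # For a linear model, d(score)/dx equals the weight vector,
    # so each feature's attribution is simply weight * input value.
    grad = weights
    return grad * x

x = np.array([1.0, 1.0, 1.0, 0.0, 1.0])  # bag of words for "the movie was great"
attributions = gradient_x_input(x)
for token, a in zip(vocab, attributions):
    print(f"{token}: {a:+.2f}")
```

A useful sanity check for this family of methods is "completeness": here the attributions sum exactly to the model's score, so each token's share of the prediction is fully accounted for.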


Exploring the Possibilities of AI in 2023 - The Tiche

#artificialintelligence

There are two distinct types of AI. One is reactive machines, which have no memory and cannot use past experiences to inform future decisions. The other is self-aware AI, which can understand its current state and make inferences about its environment. When choosing an AI system, organizations should consider factors such as its robustness, the likelihood of its errors, and the consequences of those errors. Weighing these factors can help minimize the negative impacts of AI. But it is equally important to recognize the positive outcomes an AI system can deliver.


The uses of ethical AI in hiring: Opaque vs. transparent AI

#artificialintelligence

There hasn't been a revolution quite like this before, one that has shaken the talent industry so dramatically over the past few years. The pandemic, the Great Resignation, inflation and now talk of looming recessions are changing talent strategies as we know them. Such significant changes, and the challenge of staying ahead of them, have brought artificial intelligence (AI) to the forefront of the minds of HR leaders and recruitment teams as they endeavor to streamline workflows and identify suitable talent to fill vacant positions faster.


The Case for Transparent AI

#artificialintelligence

I can trace it back to when I watched a video of America's Got Talent. It started with singers, but soon it moved on to other categories, including illusionists. That was enough to tell Facebook's algorithms that I had to be interested in magic and that they should show me more of what they deduced I wanted to see. Now I have to be careful, because if I click on any of that content, it will reinforce the algorithms' notion that I must really be interested in card tricks, and pretty soon that's all Facebook will ever show me, even if it was all just a passing curiosity.


Researchers were about to solve AI's black box problem, then the lawyers got involved

#artificialintelligence

AI has a "black box" problem. We cram data in one side of a machine learning system and we get results out the other, but we're often unsure what happens in the middle. Researchers and developers nearly had the issue licked, with "explainable algorithms" and "transparent AI" trending over the past few years. Black box AI isn't as complex as some experts make it out to be. Imagine you have 1,000,000 different spices and 1,000,000 different herbs and you only have a couple of hours to crack Kentucky Fried Chicken's secret recipe.


A call for transparent AI: 'computer says no' is not enough

#artificialintelligence

Artificial intelligence (AI) models can become so complex that we no longer understand their output. AI expert Evert Haasdijk explains the importance of transparent AI. Media coverage of AI tends to be either euphoric or alarming. In the first variant, AI is presented as a divine technology that will solve all our problems, from curing cancer to ending global warming. In the latter, Frankenstein- or Terminator-inspired narratives depict AI as a technology that we cannot keep under control and that will outsmart humans in ways we cannot foresee, killing our jobs, if not threatening the survival of humanity.


Insurers Beware – Be Blasé About Big Data at Your Peril

#artificialintelligence

When the new FCA chairman Charles Randell voiced his opinion about big data and artificial intelligence (AI) in July this year, it certainly upset the apple cart for a number of individuals across the insurance sector. Mr Randell highlighted his concerns and proposed future regulations for insurers and other financial services firms using these technologies. As digital transformation takes hold of every industry, the exploration of big data and AI, and their integration into insurance services and products, has also increased. Consequently, players in the sector should be prepared to revisit the AI basics and clarify the ground rules to ensure their customers are not left exposed to the mismanagement of personal data or a security breach. It is important that insurance companies take this challenge seriously so they can avoid a scandal like the one surrounding Cambridge Analytica.


Who is driving the AI agenda and what do they stand to gain?

#artificialintelligence

From the critical, like law enforcement, healthcare, and humanitarian aid, to the mundane, like dating and shopping, artificial intelligence (AI) seems to be the answer to all our problems. AI is a catch-all phrase for a wide-ranging set of technologies, most of which apply statistical learning techniques to find patterns in large sets of data and make predictions based on those patterns. It seems like there are meetings every other week, organised by representatives from industry, government, academia, and civil society, to address the perils of AI and formulate solutions to harness its potential. But who is driving the regulatory agenda, and what do they stand to gain? This question needs to be answered, because letting industry needs drive the AI agenda presents real risks.